6 - Mathematical and Statistical Methods
- from Part I - Background and Fundamentals
- Douglas Maraun, Karl-Franzens-Universität Graz, Austria, Martin Widmann, University of Birmingham
- Book: Statistical Downscaling and Bias Correction for Climate Research
- Published online: 27 December 2017
- Print publication: 18 January 2018, pp 41-86
Summary
Statistical downscaling relies on a wide range of statistical concepts. This chapter presents a concise overview of these concepts. We attempt to explain the different issues as simply as possible but with the necessary detail and background to successfully implement a broad range of statistical downscaling methods. Section 6.1 gives an overview of random variables and probability, including the most relevant probability distributions and a very basic introduction to the modelling of extreme values. Parameter estimation, that is, the basis for calibrating statistical models, is introduced in Section 6.2. Here we focus on the widely used concept of maximum likelihood but give a brief overview of Bayesian estimators as well. Regression models, the backbone of many statistical downscaling models, are presented in Section 6.3, including a discussion of statistical model selection and evaluation. Weather generators are specific stochastic processes – these are laid out briefly in Section 6.4. Finally, predictors are often post-processed based on principal component analysis, and some statistical downscaling methods employ canonical correlation analysis. These pattern methods are introduced in Section 6.5. In the following, we will write scalars in normal font, while vectors and matrices will be written in bold font. We assume that readers have basic knowledge of significance testing – it will not be introduced in this chapter but only occurs in the context of model selection. In fact, significance testing is very useful in many contexts, but it is not essential for implementing a statistical downscaling method. Also, even though they are relevant for some predictor transformations, we do not discuss clustering algorithms.
Random Variables and Probability Distributions
Events and Random Variables
In synoptic meteorology, the atmospheric circulation is often characterised by discrete weather types. On each day, the actual circulation might randomly assume one of these types. In statistics, the space of all possible weather types is called the sample space or event space S. For the considered weather types, the sample space could be, for example, S = {W, NW, N, NE, E, SE, S, SW}. Subsets of the event space, for example, {SE} or {NW, N, NE} (i.e. all northerly weather types), are called events.
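The correspondence between sample space and events can be sketched with Python sets; the uniform probability model below is purely an illustrative assumption, not part of the text:

```python
# Sample space of the eight discrete weather types.
S = {"W", "NW", "N", "NE", "E", "SE", "S", "SW"}

# An event is a subset of the sample space, e.g. all northerly weather types.
northerly = {"NW", "N", "NE"}
assert northerly <= S  # an event must be a subset of S

# Under a (purely illustrative) uniform probability model, the probability
# of an event is the fraction of the sample space it occupies.
p_northerly = len(northerly) / len(S)
print(p_northerly)  # 0.375
```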
11 - Perfect Prognosis
- from Part II - Statistical Downscaling Concepts and Methods
- pp 141-169
Summary
State-of-the-art GCMs do not simulate regional-scale climate processes and also have a limited ability to represent resolved meso-scale processes. But GCMs often do have reasonable skill simulating synoptic-scale variability, such as the passage of cyclones and anticyclones. Perfect prognosis (PP) statistical downscaling exploits this skill to simulate regional climate. The approach builds upon synoptic meteorology and climatology, which analyses the relationship between synoptic scale and regional weather (Hewitson and Crane 1996).
DEFINITION 11.1 In PP statistical downscaling, a statistical model that links large-scale predictors to local-scale predictands is calibrated to observed data. The statistical model is then applied to predictors simulated by climate models.
Comprehensive reviews about PP statistical downscaling methods for climate change studies have been published by Hewitson and Crane (1996), Wilby and Wigley (1997), Zorita and von Storch (1997), Fowler et al. (2007), Maraun et al. (2010a) and Schoof (2013); region-specific reviews are, for example, Hanssen-Bauer et al. (2005) for Scandinavia and Jacobeit et al. (2014) for the Mediterranean.
In the following, we will first introduce the assumptions underlying the PP approach (Section 11.1). The most widely used PP methods will be presented in Section 11.2. The skill of a PP method in a specific context is largely determined by the structure of the method. These issues will be discussed in depth in Section 11.3. Issues related to the use of different predictand variables, in particular impact-relevant predictands, will be summarised in Section 11.4. Crucial for any PP method to work sensibly under climate change is a careful predictor choice. Relevant assumptions and potential predictors for temperature and precipitation will be discussed in detail in Section 11.5. Section 11.6 lays out the statistical construction of a PP method, with a focus on climate change applications. A cookbook is finally given in Section 11.7. Several advanced PP methods condition weather generators on large-scale atmospheric predictors. Given the peculiarities of these method types, we will discuss them individually in Chapter 13. The following discussions on assumptions and predictor requirements, however, apply equally to these methods.
2 - Regional Climate
- from Part I - Background and Fundamentals
- pp 9-15
Summary
Before discussing regional climate modelling in more detail, it is sensible to briefly sketch regional climate itself, and the factors controlling regional climate and climate change. We use the term rather loosely, spanning a range of scales below the continental and synoptic scale. Some may prefer to distinguish between regional and local climate – we decided to use only one term – which scales we refer to should become clear from the context. In fact, we will discuss that it is essential to define the relevant scales individually for any use case. In some situations, a region might be a whole country, in other situations a district or even just a specific valley.
Regional climate is determined by a vast number of climatic processes spanning global to local scales. To successfully model regional climate change, it is essential to successfully model the processes relevant for the specific application. We will therefore sketch these processes in the following, considering the Alps as a showcase.
Large- and Planetary-Scale Processes
The global temperature field is to a first order determined by the influence of latitude and land–sea distribution on the radiative balance, and the different thermal capacity of land and ocean. The Alps are located in the mid-latitudes, in a temperate climate (Figure 2.1, left). The mid-latitudes are a region with a strong meridional temperature gradient and high baroclinic instability, which controls the position of the jet stream and fuels the North Atlantic storm track (Hoskins and Valdes 1990, Lynch and Cassano 2006). Jet streaks, regions of acceleration and divergence in the jet stream, control the genesis of cyclones. The upper-level winds also steer the path of cyclones. Conversely, the passage of cyclones along the storm track drives the jet stream, intricately linking the two phenomena (Woollings 2010). The Alps are located just south of the climatological mean position of the polar front and polar jet in both summer and winter. Cyclones and anticyclones advect different air masses to central Europe. For instance, arctic continental air brings cold and dry air, tropical maritime air is warm and moist, and tropical continental air is hot and dry. The jet stream itself follows planetary-scale Rossby waves meandering around the globe.
Part I - Background and Fundamentals
- pp 7-8
Part III - Downscaling in Practice and Outlook
- pp 225-226
17 - A Regional Modelling Debate
- from Part III - Downscaling in Practice and Outlook
- pp 263-268
Summary
Along with the increasing amount of research in climate downscaling, a critical debate about the limitations and suitability of downscaling has arisen. The debate revolves around essentially two questions: first, are climate models skillful enough to provide user-relevant information about climate change? Some researchers argue that GCMs cannot provide skillful input to downscaling, and that downscaling itself cannot add value to GCM simulations. Second, is downscaling necessary, or could it be avoided by bottom-up approaches to decision making? We will review these two discussions and critically comment on the issues raised.
Are Climate Models Fit for Purpose?
Kundzewicz and Stakhiv (2010) discuss whether climate models are "ready for prime time" in climate impact research. They argue that GCMs were originally developed to advise mitigation policies but are increasingly applied to inform adaptation decisions as well. Whereas for the former purpose a broad representation of global climate change is sufficient, the latter requires accurate projections of regional changes, in particular of highly uncertain processes such as the hydrological cycle. The authors argue that climate models are not (yet) skillful enough for direct application in adaptation planning. This view has been shared by other authors (e.g. Pielke and Wilby 2012); it is based on a series of claims (Kundzewicz and Stakhiv 2010, Pielke and Wilby 2012): first, GCMs do not skillfully include all first-order forcings and feedbacks; second, GCMs do not skillfully simulate relevant regional processes such as El Niño; third, GCMs do not reproduce observed trends; and fourth, downscaling cannot improve GCM simulations. The first three claims relate to GCM skill, the fourth to downscaling skill and added value. In the following we will discuss these issues.
Skill of GCMs
Pielke et al. (2009) argue that current GCM projections do not consider all first-order forcings that determine future climate, such as the effect of aerosols or changes in land use and land cover. Over recent years, these aspects have increasingly become a focus of climate research and featured prominently in the most recent IPCC assessment report (Boucher et al. 2013, Myhre et al. 2013).
Part II - Statistical Downscaling Concepts and Methods
- pp 133-134
Appendix B - Useful Resources
- pp 293-302
Summary
A dynamic version of these pages will be available at www.cambridge.org/climate research. The authors intend to regularly update these resources for the coming years.
Statistical Downscaling Software Packages and Portals
Here we present (in alphabetical order) a selection of open-source software packages and online portals to perform statistical downscaling. This list is not comprehensive. We may have missed useful and important resources. Moreover, the methods listed here are affected by the limitations discussed throughout the book and should thus be selected and applied carefully in a given context.
downscaleR
R package by the Santander Meteorology Group containing functions for MOS, such as scaling and different quantile mapping variants, and for PP methods, such as linear regression, generalised linear models, weather-type-based downscaling and the analog method.
https://github.com/SantanderMetGroup/downscaleR/
ENSEMBLES Downscaling Portal
Online portal of the Santander Meteorology Group to carry out downscaling. The webpage provides different sets of predictor and predictand data (with upload options) and a range of MOS and PP methods.
https://www.meteo.unican.es/downscaling/ensembles
ESD
R package by Rasmus Benestad, Abdelkader Mezghani and Kajsa Parding. The package contains a range of functions to post-process and analyse large data sets (e.g. in NetCDF format) including PP statistical downscaling based on linear regression and temporal disaggregation.
https://github.com/metno/esd
Rglimclim
R package for a conditional multisite, multivariate weather generator based on generalised linear models developed by Richard Chandler.
http://www.ucl.ac.uk/~ucakarc/work/glimclim.html
MeteoLab
Matlab toolbox of the Santander Meteorology group for statistical analysis and data mining in meteorology, focusing on statistical downscaling methods.
https://meteo.unican.es/trac/MLToolbox
SDSM
Online statistical downscaling tool by Rob Wilby for perfect prognosis and change-factor weather generators.
http://co-public.lboro.ac.uk/cocwd/SDSM/
qmap
R package by Lukas Gudmundsson for quantile mapping.
https://cran.r-project.org/web/packages/qmap/
Programmes and Initiatives
CMIP
The Coupled Model Intercomparison Project (CMIP) defined a framework to conduct and intercompare simulations with coupled atmosphere–ocean GCMs. The most recent phase is CMIP5 (Taylor et al. 2012).
http://cmip-pcmdi.llnl.gov/
CORDEX
Initiative of the World Climate Research Programme (WCRP) to advance and coordinate the science and application of climate downscaling through global partnerships, in particular to generate large high-resolution multimodel ensembles for all regions of the Earth.
7 - Reference Observations
- from Part I - Background and Fundamentals
- pp 87-95
Summary
Observations are key to any modelling application; they tie the model to reality. In statistical downscaling, observations are needed for both calibration and evaluation. They are either needed as predictor or predictand data.
In the Global Climate Observation System (GCOS) report to the United Nations Framework Convention on Climate Change (UNFCCC) at the fourth conference of the parties (COP-4; GCOS 1998), it is stated that in practice, available observations often have major deficiencies with respect to climate needs. While considerable progress has been made since then (e.g. Karl et al. 2010), fundamental limitations persist. Observational data, both gridded and station data, should not naively be considered true. All observations are a measurement of a physical quantity, often followed by some post-processing. As such they are based on some type of model. This is obvious for gridded observations or reanalyses, which are derived via some interpolation or numerical weather prediction model. But station measurements, too, are based on some technical device (a thermometer inside a shelter, a bucket), which is assumed to correctly represent a meteorological variable (air temperature, precipitation). Therefore, observations are subject to often substantial random and systematic errors. For any useful application of observational data in a given context, one has to know how the data have been generated in order to understand what they actually represent and what their limitations are.
Predictand Data
Statistical downscaling is often asked to provide data at station locations (in particular in PP). There is, however, a growing demand for full fields and gridded products.
Station Data
Historical climate observations, in particular before the 1970s, are typically based on station-based measurements. These have in turn been used to calibrate locally operating impact models. To generate projections of the impacts of climate change, impact modellers therefore often demand downscaling to these station locations. Most PP methods in fact attempt to provide such results.
In many regions such as Europe and North America, a dense network of weather stations exists, especially of rain gauges (Figure 7.1). In some regions, however, station data are not routinely shared, stations are sparse, or no station data exist at all.
14 - Other Approaches
- from Part II - Statistical Downscaling Concepts and Methods
- pp 220-224
Summary
In addition to dynamical downscaling and the PP and MOS statistical approaches presented in Chapters 11 and 12, there are a number of alternative downscaling methods that combine different approaches. These will now be outlined, and some examples will be given.
Statistical-Dynamical Downscaling and Emulators
There are combinations of dynamical and statistical downscaling that are different from the MOS post-processing. They aim at using the physically based small-scale information from RCMs but without performing computationally expensive RCM runs for every timestep and ensemble member of the GCM simulations to be downscaled. In contrast to standard PP downscaling, which is also an alternative to dynamical downscaling, these approaches are not restricted by the availability of observations for model fitting and provide values for the same region and grid as the RCM. The output is thus spatially complete and can be produced for regions without any observations.
Statistical-Dynamical Downscaling
The first type of this approach is known as statistical-dynamical downscaling. It exploits the fact that many of the large-scale atmospheric states that usually would be used to drive an RCM are similar. The idea is that running the RCM for each situation in a set of similar states creates redundant information and can thus be avoided. Instead, the RCM is only run for typical weather situations, and then the final output is generated by a weighted average of the RCM outputs for the typical situations, with weights determined by the frequency of these weather types in the target period. The approach goes back to wind field studies by Wippermann and Gross (1981) and Heimann (1986) and has been introduced for downscaling of climate models by Frey-Buness et al. (1995). Fuentes and Heimann (2000) have improved the definition of the weather types and demonstrated skill comparable to full RCM simulations for winter precipitation in the Alps for the period 1981–1992. The approach has been used for instance for downscaling windstorm impacts over Germany from climate change simulations (Pinto et al. 2010) and near-surface wind fields from decadal hindcasts and climate change simulations, which in turn were used for wind energy estimates (Reyers et al. 2015).
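The weighting step at the core of statistical-dynamical downscaling can be sketched in a few lines. All fields and frequencies below are invented toy numbers, not results from any of the cited studies:

```python
import numpy as np

# Hypothetical RCM output (e.g. mean precipitation on a 2x2 grid), one field
# per representative weather type, each from a single short RCM run.
rcm_output = {
    "westerly":     np.array([[4.0, 3.5], [3.0, 2.5]]),
    "northerly":    np.array([[1.0, 1.5], [2.0, 2.5]]),
    "anticyclonic": np.array([[0.2, 0.3], [0.4, 0.5]]),
}

# Relative frequencies of the weather types in the target period, as they
# would be diagnosed from the coarse GCM simulation to be downscaled.
freq = {"westerly": 0.5, "northerly": 0.3, "anticyclonic": 0.2}

# Statistical-dynamical estimate: frequency-weighted average of the RCM fields.
estimate = sum(freq[wt] * field for wt, field in rcm_output.items())
print(estimate)
```

A change in the GCM-simulated weather-type frequencies between present and future thus translates directly into a change of the downscaled field, without any new RCM runs.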
5 - User Needs
- from Part I - Background and Fundamentals
- pp 33-40
Summary
Regional climate modelling might be purely discovery driven. Often, however, it is funded and designed to supply users with information. In this chapter, we will lay out the needs of different users in the broader context of climate change adaptation.
Context
The VALUE network (Maraun et al. 2015) has carried out a survey on requirements by users of regional climate change data (Roessler et al. 2017). Information on temperature, precipitation, wind speed, relative humidity and global radiation is demanded by more than 50% of the surveyed users (a group dominated by impact modellers from the water sector). Around 40% demand fields of data at a spatial resolution of 1 km or finer, about one third at an hourly temporal resolution, and another 50% at daily resolution. Often users essentially demand "future observations". As will become clear from the discussions throughout this book, robust information at such a high resolution in both space and time can in general not be provided in the form of time series.
In recent years, however, users are in practice often additionally confronted with a very different problem, which Barsugli et al. (2013) coined the practitioner's dilemma: web-based portals now provide access to a proliferation of high-resolution climate projections. These products are based on different methods and assumptions and often provide contradictory results (Hewitson et al. 2014), but users do not have guidance to select appropriate data sets and use them wisely. According to Barsugli et al. (2013, p. 424), “products are sometimes selected on the basis of availability, convenience of format, and familiarity with the provider.” Obviously, there is a huge discrepancy between the expressed user needs and the actual choice of data.
At the interface between providers and users of regional climate information, a provider's dilemma also exists: demands from national and local governments, climate service providers, development banks and international aid organisations pressure the scientific community to operationalise the provision of regional climate projections – but, as will be discussed throughout this book, the science required to provide such information is still foundational research (Hewitson et al. 2014). In this context, scientists face questions about their responsibilities when delivering uncertain information that may be used for real-world decisions.
18 - Use of Downscaling in Practice
- from Part III - Downscaling in Practice and Outlook
- pp 269-280
Summary
Users of climate information, as broadly discussed in Chapter 5, require credible, defensible and actionable information as a basis for decision making. This chapter intends to close the circle and comes back to these initial requirements.
In Part I of the book, we discussed the broader context of climate projection uncertainties in the light of observational limitations as well as the skill and remaining shortcomings of dynamical climate models. In Part II, we introduced the range of statistical downscaling approaches and methods and discussed their structural skill and limitations as well as the assumptions underlying their use in climate projections. In Part III, finally, we have reviewed the performance of many of these methods in practical applications.
In the following sections, we synthesise the relevance of these issues for generating and providing useful regional climate projections. We adopt a stance in the spirit of Mastrandrea et al. (2010), who highlight the need for vulnerability assessments that bring together bottom-up knowledge of existing vulnerabilities with top-down climate-impact projections.
Hewitson et al. (2014) call for considering statistical downscaling in a wider landscape of climate information. In fact, users of climate information do not care about which method has been used to generate this information as long as it meets their requirements. We therefore explicitly assume a method-agnostic stance: we do not restrict ourselves to discussing statistical downscaling options but also present situations in which dynamical downscaling might be required or in which no downscaling might be needed at all.
Regional climate modelling might be purely curiosity driven. In this chapter, however, we assume a user-relevant context and thus a collaboration between regional climate modellers and stakeholders such as impact modellers or decision makers. Section 18.1 highlights the need to first assess whether climate and climate change are important factors in a given context. Sections 18.2 to 18.4 then summarise the discussions throughout the book about how to select suitable climate models and downscaling methods for a given application. The actual interpretation of the results in the context of projection uncertainties is the subject of Section 18.5. Section 18.6 emphasises the need for interdisciplinary collaboration but also critically discusses the difficulties of such efforts.
4 - Rationale of Downscaling
- from Part I - Background and Fundamentals
- pp 24-32
Summary
In this chapter, we discuss the basic ideas, assumptions and concepts underlying downscaling. The concept itself is considered in Section 4.1. In different user contexts, different aspects of the climate system – expressed in statistical terms – will be relevant. We introduce these aspects in Section 4.2. Each downscaling model is based on a set of assumptions; these are presented in Section 4.3. But the downscaled model itself also has to fulfill specific requirements, as will be discussed in Section 4.4. In Section 4.5 we will discuss remaining issues such as added value.
What Is Downscaling?
As already introduced in Chapter 1, the main rationale and purpose of downscaling is to bridge the gap from the large spatial scales represented by GCMs to the smaller scales required for assessing regional climate change and its impacts. Dynamical downscaling employs regional climate models (RCMs) to simulate the atmosphere and its coupling with the land surface at a higher resolution, but over a limited domain (Rummukainen 2010). Boundary conditions are taken from the driving GCM. Statistical downscaling derives empirical links between large and local scales and applies these to climate model output. The two main variants of statistical downscaling have already been sketched in Chapter 1; they will be introduced in more detail in Part II. For now only the basic difference is important: so-called perfect prognosis statistical models – essentially all regression and weather-type methods – are calibrated against observed large-scale predictors and local-scale predictands. Under climate change, the statistical model is applied to predictors from a GCM. So-called model output statistics methods – essentially all bias correction methods – calibrate a transfer function between climate model simulations and observations in present climate, and apply this transfer function to future climate model simulations. Given that bias correction is often applied to RCMs rather than directly to GCMs, we will in the following sections discuss not only statistical but briefly also dynamical downscaling.
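The perfect prognosis workflow can be made concrete with a minimal sketch. All data here are synthetic, and the linear link, noise level and predictor shift are assumptions for illustration only; real PP methods use carefully chosen physical predictors:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Calibration against observations (synthetic toy data) ---
x_obs = rng.normal(size=500)                # observed large-scale predictor
y_obs = 2.0 * x_obs + rng.normal(size=500)  # observed local-scale predictand

# Least-squares fit of the PP transfer function y = a*x + b.
a, b = np.polyfit(x_obs, y_obs, deg=1)

# --- Application: drive the calibrated statistical model with predictors
# taken from a (here synthetic) GCM simulation of a changed climate. ---
x_gcm = rng.normal(loc=0.5, size=500)
y_downscaled = a * x_gcm + b

print(a, b)
```

The calibration never sees model output; the GCM enters only at the application stage, which is exactly what distinguishes PP from MOS.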
Generally speaking, downscaling uses additional information from regional or local scales that is not present in GCMs to derive information about regional climate and climate change, conditional on the driving GCM. RCMs resolve regional-scale processes and use regional information on the orography; statistical downscaling uses information on observed climate at selected locations.
3 - History of Downscaling
- from Part I - Background and Fundamentals
- pp 16-23
Summary
Downscaling in Weather Forecasting
The first downscaling methods were invented as early as the late 1940s (Klein 1948) and became operational in the early days of numerical weather prediction at the end of the 1950s. Back then, operational numerical weather prediction models were by far too coarse to predict local weather, and furthermore they did not forecast all variables of interest but only a few such as pressure and temperature. By that time, a considerable network of observed weather time series was already available. Klein et al. (1959) employed these data to infer statistical relationships between the observed large-scale circulation – for those variables that were simulated by the models – and the observed local-scale weather variables of interest. The statistical model was then applied to downscale the actual numerical forecast of the large-scale circulation to a forecast of the local weather. The key assumption of this approach is that the large-scale predictor has been perfectly forecast by the numerical model; hence the approach has been coined perfect prognosis (PP). After some years, a considerable database of past forecasts had been archived. Analyses of these data revealed that numerical forecasts even of the large-scale weather were of course not perfect but showed systematic deviations compared to observations. Yet this database also became key to mitigating this problem: Glahn and Lowry (1972) developed a new approach that – during calibration – did not take the predictors from observations but from the archived numerical forecasts. For a new weather prediction, the inferred statistical link is then applied to the new numerical forecast. As this approach is basically a post-processing of numerical model data, it has been coined model output statistics (MOS). The key advantage of MOS is that it contains by construction a bias correction of the numerical model.
Current weather prediction systems employ complex MOS approaches with several predictors that are continually recalibrated to provide the highest predictive skill.
In parallel, numerical approaches were developed to improve the resolution and accuracy of forecasts over a target region. The first limited-area model was developed at the US National Meteorological Center (Howcroft 1966, Gerrity and McPherson 1969) and became operational in 1971. This model covered the US, Canada and the Arctic Ocean at a horizontal resolution of 190.5km at 60◦N and was driven at the lateral boundaries with input from a Northern Hemisphere numerical weather prediction model.
10 - Structure of Statistical Downscaling Methods
- from Part II - Statistical Downscaling Concepts and Methods
- Douglas Maraun, Karl-Franzens-Universität Graz, Austria, Martin Widmann, University of Birmingham
-
- Book:
- Statistical Downscaling and Bias Correction for Climate Research
- Published online:
- 27 December 2017
- Print publication:
- 18 January 2018, pp 135-140
-
- Chapter
- Export citation
Appendix A - Methods Used in This Book
- 18 January 2018, pp 290-292
Summary
Several plots in this book are based on evaluation results from the VALUE perfect predictor experiment. The following method descriptions are extracted from the compilation in Gutiérrez et al. (2017).
CFE
Bias correction at station Clausthal-Zellerfeld-Erbprinzentanne. Scaling: simple rescaling of daily intensities. Non-parametric quantile mapping: linear interpolation between neighbouring empirical quantiles. The parametric quantile mapping is based on a two-parameter gamma distribution.
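The non-parametric variant – linear interpolation between neighbouring empirical quantiles – can be sketched as follows (a toy illustration with made-up quantiles; the function name `quantile_map` is ours):

```python
from bisect import bisect_left

def quantile_map(x, model_q, obs_q):
    """Map a model value onto the observed distribution by linear
    interpolation between neighbouring empirical quantiles.
    Values outside the calibration range are clipped here."""
    if x <= model_q[0]:
        return obs_q[0]
    if x >= model_q[-1]:
        return obs_q[-1]
    i = bisect_left(model_q, x)
    w = (x - model_q[i - 1]) / (model_q[i] - model_q[i - 1])
    return obs_q[i - 1] + w * (obs_q[i] - obs_q[i - 1])

# Toy sorted empirical quantiles for model output and observations
model_q = [0.0, 1.0, 2.0, 4.0]
obs_q = [0.0, 2.0, 3.0, 8.0]
corrected = [quantile_map(v, model_q, obs_q) for v in [0.5, 1.5, 3.0]]
```

Real implementations must additionally decide how to extrapolate beyond the calibrated quantile range; clipping is only the simplest choice.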
RaiRat-M6/M7/M9
Deterministic MOS method. Temperature bias correction methods used in Räisänen and Räty (2013). M6 additively corrects means, M7 additionally rescales the standard deviation. M9 is a semi-empirical quantile mapping, where the empirical transfer function is smoothed with a running mean. The transfer functions are calibrated for each calendar month, using a two-month window centred on the month of interest.
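A minimal sketch of M6- and M7-style temperature corrections, under the simplifying assumption that they reduce to an additive mean shift and an additional rescaling of anomalies by the ratio of observed to modelled standard deviation (function names and data are ours):

```python
from statistics import mean, stdev

def correct_mean(model, obs_clim_mean, model_clim_mean):
    """M6-style: additively shift so the model mean matches observations."""
    delta = obs_clim_mean - model_clim_mean
    return [x + delta for x in model]

def correct_mean_std(model, obs, ref):
    """M7-style: additionally rescale anomalies (about the calibration-
    period model mean) by the ratio of standard deviations."""
    m_ref, s_ref = mean(ref), stdev(ref)
    return [mean(obs) + (stdev(obs) / s_ref) * (x - m_ref) for x in model]

obs = [10.0, 13.0, 16.0]   # toy observed temperatures, calibration period
ref = [0.0, 2.0, 4.0]      # model over the calibration period
fut = [1.0, 3.0, 5.0]      # model over the application period
m6 = correct_mean(fut, mean(obs), mean(ref))
m7 = correct_mean_std(fut, obs, ref)
```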
Ratyetal-M6/M7/M9
Deterministic MOS method. Precipitation bias correction methods used in Räty et al. (2014). M6 adjusts daily precipitation values by rescaling mean precipitation and separately rescaling anomalies about the mean. M7 adjusts daily precipitation by a power-law scaling. M9 is a parametric quantile mapping based on two different gamma distributions, fitted separately below and above the 95th percentile of daily precipitation on wet days. A 0.1 mm threshold was used to define wet days. The transfer functions are calibrated for each calendar month, using a three-month time window centred on the month of interest.
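The power-law adjustment can be sketched under a simplifying assumption: with the exponent prescribed, the prefactor is chosen so that the corrected mean matches the observed mean. This is an illustration only, not the exact fitting procedure of Räty et al. (2014); the function name and numbers are ours.

```python
def power_law_scale(model, obs_mean, b=1.2):
    """Power-law adjustment x' = a * x**b: with the exponent b fixed,
    a is chosen so that the corrected mean equals the observed mean."""
    mb = sum(x ** b for x in model) / len(model)
    a = obs_mean / mb
    return [a * x ** b for x in model]

wet_days = [0.5, 1.0, 2.0, 5.0]   # toy daily precipitation (mm)
corrected = power_law_scale(wet_days, obs_mean=3.0)
```

An exponent b > 1 stretches the upper tail more than the bulk, which is why power-law scalings are popular for heavy-tailed precipitation intensities.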
ANALOG
PP method. Standard analog technique using Euclidean distance considering the complete fields to compute distances (Gutiérrez et al. 2013, San-Martín et al. 2017). Candidate predictors are sea-level pressure, 2m temperature, temperature at 500hPa, 700hPa and 850hPa, specific humidity at 500hPa and 850hPa, and 500hPa geopotential height. The method has been trained across different zones covering Europe (similar to the Prudence regions) and has no seasonal component.
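At its core, the standard analog technique is a nearest-neighbour search over a library of observed large-scale fields: the local observation of the closest library day is returned as the downscaled value. A minimal sketch with toy fields (the function `analog_downscale` and data are ours):

```python
import math

def analog_downscale(target, library_fields, library_local):
    """Return the local observation of the library day whose (flattened)
    large-scale field is closest to the target in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(range(len(library_fields)),
               key=lambda i: dist(library_fields[i], target))
    return library_local[best]

# Toy library: flattened large-scale fields and co-occurring local values
fields = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
local = [5.0, 7.0, 9.0]
pred = analog_downscale([0.9, 1.2], fields, local)
```

In practice the fields would be standardised, multi-variable predictor vectors, but the search itself is exactly this simple.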
MLR-AAN/AAI/AAW/RSN/ASW/ASI
PP method. Pointwise multiple linear regression for temperature using gridpoint raw data (or standardised anomalies) as predictors (Huth 2002, Huth et al. 2015). The first letter of the code refers to the raw (R) or anomaly (A) data used as predictors, the second letter refers to the annual (A) or seasonal (S) training, and the third letter refers to inflation (I) or white noise (W) variance correction (N for no correction). For comparison, the method has also been applied to precipitation in VALUE. Predictors are sea-level pressure and temperature at 850hPa.
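The inflation ('I') and white-noise ('W') variance corrections can be sketched as follows. This is a toy illustration: `variance_correct` is our name, and the 'W' branch assumes the missing variance is added as Gaussian noise.

```python
import random
from statistics import mean, stdev

def variance_correct(pred, obs, mode="I", rng=None):
    """Adjust regression predictions whose variance is too low:
    'I' inflates anomalies to the observed standard deviation,
    'W' adds white noise carrying the missing variance,
    'N' leaves the predictions unchanged."""
    mp, sp, so = mean(pred), stdev(pred), stdev(obs)
    if mode == "I":
        return [mp + (so / sp) * (p - mp) for p in pred]
    if mode == "W":
        rng = rng or random.Random(0)
        s_noise = (so ** 2 - sp ** 2) ** 0.5
        return [p + rng.gauss(0.0, s_noise) for p in pred]
    return list(pred)

obs = [1.0, 3.0, 5.0, 7.0]
pred = [3.0, 3.5, 4.5, 5.0]   # regression output: right mean, too smooth
infl = variance_correct(pred, obs, "I")
```

Inflation restores the marginal variance deterministically but exaggerates the predictable signal, whereas white noise keeps the signal and adds unpredictable scatter; this is exactly the trade-off behind the I/W/N variants.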
1 - Introduction
- 18 January 2018, pp 1-6
Summary
Climate is changing and will continue to change. Societies and ecosystems are affected by and often depend on climate and its variability. Already in 1992, the United Nations Framework Convention on Climate Change stated that all parties shall “cooperate in preparing for adaptation to the impacts of climate change” (United Nations 1992). Over the last decades, several countries have developed national adaptation strategies. The EU strategy on adaptation to climate change (European Commission 2013), for instance, acknowledges the need to take adaptation measures at all levels ranging from national to regional and local levels. The Global Framework for Climate Services (GFCS), established in 2009, sets out to develop and communicate climate information to “enable better management of the risks of climate variability and change and adaptation to climate change” (http://www.wmo.int/gfcs/vision). In short, there is an urgent demand for scientifically credible climate change information, in particular at the regional scale (Hewitt et al. 2012). One approach to obtain information about regional climate change is downscaling of global climate projections. In fact, a plethora of different data products have already been made available via internet portals.
Yet the provision of regional climate change information is one of the big challenges in climate science (Schiermeier 2010) and still a subject of essentially basic research (Hewitson et al. 2014). A Nature editorial prominently pointed out that “certainty is what current-generation regional studies cannot yet provide” (Nature 2010). Kundzewicz and Stakhiv (2010) argue that climate models have originally been developed to guide mitigation decisions. They could provide a broad picture of global climate change but would not yet be skillful to serve as input for regional adaptation planning. Kerr (2011b) brings forward a range of arguments which have been issued against current downscaling practice, and, in a later piece (Kerr 2011a), discusses the challenges of providing actionable climate information.
Against this background, the book at hand attempts to provide a reference for a range of approaches and methods often summarised as statistical downscaling. At the same time, the book aims to put the more technical issues of statistical downscaling into the broader context of user needs, regional climate modelling uncertainties and limitations, and good scientific practice. To begin with, we would like to sketch the scientific idea of statistical downscaling and then give some guidance on how to best approach this book.
13 - Weather Generators
- from Part II - Statistical Downscaling Concepts and Methods
- 18 January 2018, pp 201-219
Summary
For many applications long weather time series are required (Richardson and Wright 1984): for instance, to drive impact models for deriving design values of hydraulic structures or to assess meteorological impacts on hydrology or agriculture. In practice, however, observational records are often short, suffer from gaps and inhomogeneities or are simply missing for some meteorological variables. Weather generators have been developed to synthetically generate weather time series of, at least in theory, infinite length.
DEFINITION 13.1 As weather generators we define stochastic models of meteorological variables that explicitly model their marginal distribution and temporal dependence.
Since the development of the first precipitation (Buishand 1977, Katz 1977) and weather generators (Richardson 1981), these models have become widely used to produce long surrogate time series, to impute missing data (e.g. Yang et al. 2005) and increasingly to downscale climate projections for impact assessment (e.g. Hulme et al. 2002).
In the following, we will first introduce the assumptions underlying the use of weather generators in climate change studies (Section 13.1), followed by an overview of the most widely used weather generator approaches (Section 13.2). Structural skill of weather generators will be discussed in Section 13.3. Finally, we will present approaches to incorporate climate change in weather generators in Section 13.4. The development of weather generators is – depending on how they incorporate climate change – similar to the PP and MOS approaches. For conditional weather generators, refer to the PP cookbook in Section 11.7; for change-factor weather generators, refer to the MOS cookbook in Section 12.10.
Assumptions
As will be discussed in detail in Section 13.4, weather generators can be used in two different settings to downscale climate projections: with so-called change factors – climate model–simulated changes in long-term climate statistics, which are used to modify the parameters of the weather generator – and with time series of meteorological predictors, taken from climate model simulations, that condition the weather generator parameters on a day-by-day basis and impose long-term changes.
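The change-factor setting can be illustrated with a minimal Richardson-type generator. This is our simplification: a first-order two-state Markov chain for precipitation occurrence, exponentially distributed wet-day amounts, and a single change factor rescaling the mean intensity for a future climate.

```python
import random

def generate_precip(n, p_wet_given_dry, p_wet_given_wet, mean_intensity,
                    change_factor=1.0, seed=0):
    """Minimal Richardson-type precipitation generator: first-order
    two-state Markov chain for occurrence, exponential wet-day amounts.
    A change factor rescales the mean intensity for a future climate."""
    rng = random.Random(seed)
    series, wet = [], False
    for _ in range(n):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        series.append(rng.expovariate(1.0 / (mean_intensity * change_factor))
                      if wet else 0.0)
    return series

ctrl = generate_precip(20000, 0.3, 0.6, 5.0)
fut = generate_precip(20000, 0.3, 0.6, 5.0, change_factor=1.2)
```

Here only the intensity parameter carries the climate change signal; a full change-factor application would also perturb the transition probabilities, and a conditioned generator would instead make all parameters functions of daily large-scale predictors.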